Mintaka-Qwen3-1.6B-V3.1 is an efficient model focused on scientific reasoning. It is built on Qwen3-1.6B and trained on a DeepSeek-v3.1 synthetic reasoning-trajectory dataset (10,000 records). The model is optimized for random-event simulation, logical problem analysis, and structured scientific reasoning, striking a balance between symbolic precision and lightweight deployment.
Natural Language Processing · Transformers · English
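
Below is a minimal usage sketch with the Transformers library. The repository id `Mintaka-Qwen3-1.6B-V3.1` is assumed from the model name, and the example prompt is illustrative; substitute the actual hub path where the checkpoint is published.

```python
# Minimal inference sketch using the Transformers library.
# Assumptions: the hub repo id below is hypothetical, and `device_map="auto"`
# requires the `accelerate` package to be installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mintaka-Qwen3-1.6B-V3.1"  # hypothetical hub path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # place weights on available GPU/CPU automatically
)

# Example prompt exercising the model's random-event reasoning focus.
prompt = "A fair six-sided die is rolled twice. What is the probability that the sum is 7?"
messages = [{"role": "user", "content": prompt}]

# Build chat-formatted input ids and move them to the model's device.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```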